perm filename FRM.MSG[P,JRA] blob sn#547795 filedate 1980-11-26 generic text, type C, neo UTF8
COMMENT ⊗   VALID 00007 PAGES
C REC  PAGE   DESCRIPTION
C00001 00001
C00002 00002	∂28-Oct-76  2107	RPG  	BOOK
C00009 00003	∂09-Jan-79  1542	RPG  	MacLisp matcher    
C00012 00004	∂03-Mar-77  1603	FTP:GLS at MIT-AI (Guy L. Steele, Jr. )	Knight displays
C00016 00005	∂27-Feb-78  1706	HPM  	more video    
C00022 00006	∂02-May-78  1245	MG  	ML (LCF metalanguage)    
C00024 00007	∂20-Jan-79  1754	RZ at MIT-MC (Richard E. Zippel)   
C00030 ENDMK
C⊗;
∂28-Oct-76  2107	RPG  	BOOK
	I have finished the first part of the book and have the following
general comments:
	The parts on data structuring seemed to me to be forced. It may
be some residual bigotry on my part, but I felt that it
didn't flow too easily.
	On the other hand, I felt that the sections on EVALuation and
spaghetti were among the best I have read in that area (i.e. excellent).
The implementations were quite understandable and the points were well
made. 
	More specifically, the comparison or analogy between mathematical
definitions and data structuring appeared to be very unnatural, although
I felt that the points made were good. 
	On spaghetti, my feeling is that it is undoubtedly something that
people who use LISP should know about, but it may not be something that
is ever useful in practice. I have personally never known anyone who has
used it in anything like a production program (except me), and almost none
who have even used it except to see how it works. It seems that there is
always a better way to achieve one's ends than by a general-purpose &
inefficient feature. Spaghetti invariably saves too much information, tends
to favor lexical variables (especially when you usually want fluid), and forces
laziness on the part of the hacker. For instance, I wrote a hairy pattern
matcher once which required real backtracking in a car/cdr recursion (not a
simple tree search). The matcher often had to backtrack into a control frame
that had been exited. Well, in LISP370 I used spaghetti, and in MACLISP I
explicitly used continuations, which clarified control and was fairly efficient
through the compiler. (The idea was to make the cdr recursion a sub-computation
of the car recursion as opposed to a sister computation.) I plan to recode
the LISP370 version soon. 
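The MACLISP trick described above, making the cdr recursion a sub-computation of the car recursion by threading an explicit success continuation, can be sketched in a toy form. This is Python standing in for MACLISP; the 'or' choice pattern and all the names here are invented for illustration, not taken from the actual matcher:

```python
def match(pat, dat, sk):
    """Match pattern `pat` against data `dat`.

    `sk` is a zero-argument success continuation: it is called when the
    match so far succeeds, and returning a false value from it forces
    backtracking into choice points whose control frames have already
    been "exited".  Cons cells are modeled as 2-tuples (car, cdr)."""
    if pat == '?':                                   # wildcard: match anything
        return sk()
    if isinstance(pat, tuple) and pat and pat[0] == 'or':
        # A choice point.  If sk later fails, the loop falls through to
        # the next alternative: backtracking into an exited frame.
        return any(match(alt, dat, sk) for alt in pat[1:])
    if isinstance(pat, tuple) and len(pat) == 2:     # a cons cell
        # The cdr match is a sub-computation of the car match: it lives
        # inside the car's success continuation.
        return (isinstance(dat, tuple) and len(dat) == 2
                and match(pat[0], dat[0],
                          lambda: match(pat[1], dat[1], sk)))
    return pat == dat and sk()

# Count every way the pattern matches by rejecting each success:
solutions = []
def keep_going():
    solutions.append(1)
    return False                                     # demand another match

match((('or', 'a', '?'), '?'), ('a', 'b'), keep_going)
```

Both alternatives of the 'or' match the car here, so the matcher re-enters the already-exited car frame and finds two proofs, without any saved stack.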
	Another point concerns the use of continuations in the book. It
seems that they only appear briefly, as a passing concept between recursion/
subroutine returns and PC control. Well, either continuations should be
presented as a programming skill in their own right, or they should be flushed as
a puzzling interlude, or some comment might be made about their intermediary
position wrt stack control style and Program Counter style.
	I am always puzzled about the use of a "LISP Metalanguage" which is 
separate from the S-expression notation. I am not questioning the usefulness
of the metalanguage as a theoretical tool of great importance, but the use
of it as a pedagogical tool. When I TAed 206 for McCarthy, I saw the
bewilderment it caused when the two notations were introduced in parallel.
A point this "algolishness" seems to obscure is the natural ambiguity inherent
in LISP, namely between the program and the data. Now, I am definitely
not trying to say that it is the ability of LISP to write programs and execute
them which makes it such a nifty language, but rather the ability to cons up
certain restricted expressions which can be later applied on a small scale
(that is, the handles on the evaluator) that makes it a useful language.
	A point that you don't seem to make is that LISP is an environment that
the user can sit inside of while his programs suffer trauma, and not a
compiler-based language in which you create your poooooor program, and boot
it into the confusing world of the compiler, to suffer alone in the cruel
world.
	I guess LISP is naturally a hacker's programming language, in that
every nifty hook is available, and you can either hang yourself or not
according to your own abilities.
	An important aspect of any comprehensive discussion of LISP is why
it is that such an obscure looking language should be preferred by AI
hackers.
	Well, I look forward to the next sections of the book, and will
put the commented first part in your mailbox, if you have one, or mine.
				-rpg-

∂09-Jan-79  1542	RPG  	MacLisp matcher    
To:   "@USERS.DIS[AID,RPG]" at SU-AI  
Here are some changes to %MATCH, you matcher fans:
;;;;;;;;;; the matching function ;;;;;;;;;; 
;;;
;;; (arg 1) - p -     pattern
;;; (arg 2) - d -     data
;;; (arg 3) - alist - optional list of variables (* or ?) whose values
;;; 		      are to be retained during the match, much like the
;;;		      = variables below.
;;; elements of a pattern:
;;;	? 	- matches anything
;;;	* 	- matches one or more expressions
;;;	?<atom> - like "?", but sets ?<atom> to thing matched
;;;	*<atom>	- like "*", but sets *<atom> to list of things matched
;;;	=<atom>	- matched against value of <atom>
;;;	(restrict <one of above ?-variables> <pred1> <pred2> .....)
;;;		- the predi must eval to non-nil
;;;	$r, ⊗r  - same as RESTRICT
;;;	(restrict <one of above *-variables> <pred1> <pred2> .....)
;;;		- the predi must eval to non-nil when given the list
;;;		  that is being considered for that variable as its argument
;;;	(irestrict <one of above *-variables> <pred1> <pred2> .....)
;;;		- the predi must eval to non-nil when given each element of the list
;;;		  that is being considered for that variable as its argument 
;;;		  (done incrementally). So %MATCH will apply these predicates as
;;;		  it scans the input.
;;;	$ir,⊗ir - same as irestrict
;;;
;;; (%match p d <variables to retain>) attempts to match p against d
;;; (%continue-match p d <variables to retain>) attempts to get the next
;;;		  possible match between p and d (by different *-variable
;;;		  bindings).
;;*PAGE
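For readers without a MacLisp handy, the core ?/* semantics documented above can be modeled in a few lines. This is a toy Python sketch, not the %MATCH source; the restrict/irestrict forms, the retained-variable alist, and %continue-match are all omitted:

```python
def pmatch(pat, dat, bindings=None):
    """Toy model of the ?/* pattern semantics documented above.

    Returns a dict of bindings on success, None on failure.
      '?'       matches any one expression
      '*'       matches one or more expressions
      '?<atom>' binds one expression to <atom>
      '*<atom>' binds a list of expressions to <atom>
      '=<atom>' matches the current value of <atom>"""
    if bindings is None:
        bindings = {}
    if not pat:
        return bindings if not dat else None
    p, rest = pat[0], pat[1:]
    if isinstance(p, str) and p.startswith('*'):
        for i in range(1, len(dat) + 1):      # one or more expressions
            b = dict(bindings)
            if p != '*':
                b[p[1:]] = dat[:i]
            result = pmatch(rest, dat[i:], b)
            if result is not None:
                return result
        return None
    if not dat:
        return None
    if isinstance(p, str) and p.startswith('?'):
        b = dict(bindings)
        if p != '?':
            b[p[1:]] = dat[0]
        return pmatch(rest, dat[1:], b)
    if isinstance(p, str) and p.startswith('='):
        if bindings.get(p[1:]) != dat[0]:
            return None
        return pmatch(rest, dat[1:], bindings)
    if isinstance(p, list):                   # a sub-pattern
        if isinstance(dat[0], list):
            b = pmatch(p, dat[0], dict(bindings))
            if b is not None:
                return pmatch(rest, dat[1:], b)
        return None
    return pmatch(rest, dat[1:], bindings) if p == dat[0] else None

pmatch(['?x', '*y', '=x'], ['a', 1, 2, 'a'])   # → {'x': 'a', 'y': [1, 2]}
```

The *-variable loop is where %continue-match would resume: retrying with a different split of the data yields the "next possible match".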

∂03-Mar-77  1603	FTP:GLS at MIT-AI (Guy L. Steele, Jr. )	Knight displays
Date: 3 MAR 1977 1904-EST
Sender: GLS at MIT-AI
From: GLS at MIT-AI (Guy L. Steele, Jr. )
Subject: Knight displays
To: jra at SU-AI
CC: GLS at MIT-AI
Message-ID: <[MIT-AI].70877>

Well, a Knight display terminal consists of a TV monitor and
a separate keyboard (made by Microswitch).  The TV gives something
like 450 by 380 raster (I forget exact figures); using a 6x12
character matrix, this gives 96 chars wide and 37 or so lines.
The TV screen is repetitively refreshed on a bit basis from
a memory.
There are far more terminals than can be used at once - they are cheap
enough to have one in every office, naturally.  There are
14 "video buffers" of 16K 16-bit words apiece (room for
expansion to 16 buffers).  There is a crossbar (the video switch)
which connects bit sources to bit sinks (16x32).  Currently
the only sources are the video buffers, and sinks include all
the terminals and a Tektronix copier.
The video buffers hang off a unibus interface on a PDP-11.
A register controls which one is accessible to the unibus
(the PDP-11 also has 12K of normal memory).  The PDP-11 in turn
hangs off the MIT-AI pdp-10 to pdp-11 interface, which makes
PDP-11 memory addressable from the PDP-10.  In this way
a user program, after making a number of system calls, can
write directly into the bits of the TV screen:
the user memory reference is mapped by the paging box to a magic
address in high memory (location 2,,xxxxxx); the 10-11 interface
maps it to the correct PDP-11; the unibus interface maps it
to the correct TV buffer; and voila!
The unibus interface for the buffers has an ALU inside of it
which is sometimes useful; it can do an IORM faster for you
than the PDP-10 can.
The keyboards are similar to Stanford keyboards,
but I think have a few more keys.  They generate 15 bits
(64 code-generating keys @ 6 bits, shift lock, and left/right
each of ctrl/meta/top/shift).  The PDP-11 immediately folds this to 12
bits (top,shift lock, shift, meta, ctrl, and 7 bits of ascii)
and passes it to the PDP-10.  User programs can read them
in 12-bit or 7-bit mode.  User programs that read 12 bits
conventionally fold them down to 9 bits (meta, ctrl, ascii)
though they don't have to.
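The conventional 12-to-9-bit fold Guy describes can be sketched as follows. The bit positions are an assumption here (the message lists which fields exist, not where they sit), so the layout constants below are hypothetical:

```python
# Assumed layout of the 12-bit character (positions are NOT from the
# message): bits 11..7 = top, shift-lock, shift, meta, ctrl;
# bits 6..0 = 7-bit ascii.
TOP, LOCK, SHIFT, META, CTRL = 1 << 11, 1 << 10, 1 << 9, 1 << 8, 1 << 7

def fold_to_9(ch12):
    """Fold a 12-bit keyboard character down to the conventional
    9 bits (meta, ctrl, 7-bit ascii), discarding top and both
    shift bits."""
    return (ch12 & (META | CTRL)) | (ch12 & 0o177)
```

Under this layout the fold is a single mask; a real layout might need the surviving bits shifted down as well.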
Does that answer the question?  If not, tk@ai himself can probably help more.
			-- Guy

∂27-Feb-78  1706	HPM  	more video    
1. the xgp-to-gray scale algorithm (I know you told me once, but
    (blush, blush) I forgot).

2. given the gray scale channels, how much smoothing does the
    video synthesizer do?

*******
imagine a page of XGP output. It is 8 1/2 by 11 inches and has 200
dots per linear inch in each direction, each dot being black or white.
Call a black dot a 1 and a white spot a 0. The page is then an array
of 2200 rows and 1700 columns of bits.

Each data disc channel is conceptually organized into 480 rows and 512
columns of bits. There are 32 of these channels altogether, all producing
bits simultaneously.  Normally they are used to operate 32 different tv
screens, each a 480 by 512 by one bit display. Eight of these channels go
to a digital to analog converter. This D/A is thus provided with eight
bits for each of the 480x512 points on a TV screen. It takes these
8 bits and translates them to one of 2↑8 grey level values. Thus the
output of the D/A is a tv image with 480x512 samples, each one of 2↑8
shades of grey. The data-disc:D/A converter is called the video
synthesizer. There is no additional smoothing except perhaps that
caused by the limited frequency response of the video amplifiers
along the way to your TV screen, and by the finite spot size of
the electron beam in your CRT.
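The synthesizer's job, as described above, is just to assemble one bit from each of the eight channels into an 8-bit grey value per screen point. A minimal sketch (which channel supplies the most significant bit, and the black/white polarity, are assumptions, not stated in the message):

```python
def grey_from_channels(bits):
    """Combine the 8 one-bit samples for a single screen point, one
    from each data-disc channel feeding the D/A, into one of 2^8
    grey levels.  The first channel is assumed most significant."""
    value = 0
    for b in bits:
        value = (value << 1) | (b & 1)
    return value                      # 0 .. 255
```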

The XGP dot array is more than three times as wide as the data disc
array, and more than four times as tall. Also, the TV screens are
wider than they are tall, making them just about right for displaying
a half page of XGP output. A half page is still over 3 times as wide
and 2 times as tall as the DD array. By throwing away some of the
(hopefully) blank border on the page, the ratios are made exactly
3 and 2. So now imagine that each DD dot must somehow represent
a little 3 by 2 subarray of the XGP array. If we sum up the number
of 1's in each of these subarrays, we get numbers from 0 to 6.
If 0 represents white and 6 represents black, with the numbers
in between representing proportional shades of grey, we can display
the resulting pattern on the video synthesizer. It looks like
a blurred version of the XGP page. That's half density XGPSYN.
Full density turns the page on its side and compresses the
page length by a factor of 4 (instead of 2) and the width by
3, as for half density. Double density does 4 lengthwise and
6 sideways (giving numbers from 0 through 24).
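The half-density scheme above, summing the 1's in each 3-wide, 2-tall subarray to get a grey level from 0 to 6, can be sketched directly (a toy version, not the XGPSYN.SAI source; the page is assumed already trimmed to exact multiples of the cell size):

```python
def xgpsyn_half_density(page, rows_per_cell=2, cols_per_cell=3):
    """Downsample an XGP bit array as described above: each output
    value is the count of black (1) dots in a 3-wide, 2-tall subarray,
    so 0 represents white, 6 black, and 1..5 proportional greys.
    `page` is a list of equal-length rows of 0/1 bits."""
    out = []
    for r in range(0, len(page), rows_per_cell):
        row = []
        for c in range(0, len(page[0]), cols_per_cell):
            row.append(sum(page[r + i][c + j]
                           for i in range(rows_per_cell)
                           for j in range(cols_per_cell)))
        out.append(row)
    return out
```

Full and double density would just change the cell dimensions (4x3 and 4x6, giving counts 0..12 and 0..24).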

The simple summing over subrectangles is a filtering operation. Other
more complicated filtering algorithms, that involve shading some
adjacent dots, might give even better results [CACM V20 #11 799:805].
*******


3. where, oh where, might i find the video synthesizer details
    and even perhaps the xgp synthesizer listings?

*******
The source for XGPSYN is XGPSYN.SAI[PIX,HPM], though I don't think
it will be too helpful.
*******

these and other questions can only be answered by YOU. 
Besides the gratitude of the huddled masses, I'd even autograph
a copy of my forthcoming (and outgoing) LISP book.

*******
wow!
*******
					i await pregnantly,
					john




∂02-May-78  1245	MG  	ML (LCF metalanguage)    
To play with ML do "R ELCF" then when you get the LISP prompt
* do "(TML)" (u.c.) and then respond with <return> to both "THEORY?"
and "DRAFT?".  E.g., simulated interaction:

.r elcf

*(TML)

LCF version 5 issued 27-oct-77
(with simultaneous substitution and new simplification)


THEORY?*

DRAFT?*

* .......
.........
..........etc., e.g.
.
.
.*letrec map fn l = null l => [] | (fn (hd l)).map fn (tl l));;..


----------------------------------------------------------

The ML prompt should be "#" but to do this needs (DE PROMPT (N) ...?...)
--eg see PROMPT[LCF,FWH] for the Edinburgh hack.




See yah....Mike

∂02-May-78  1258	MG  	erratum   
"fn(hd l).map fn (tl l)" is more correct - I had an extra bracket before

∂20-Jan-79  1754	RZ at MIT-MC (Richard E. Zippel)   
Date: 20 JAN 1979 2056-EST
From: RZ at MIT-MC (Richard E. Zippel)
Sent-by: RLB at MIT-MC
To: JRA at SU-AI, carl at MIT-AI
CC: RWK at MIT-MC, RLB at MIT-MC, HIC at MIT-MC, RZ at MIT-MC


This is to inform you that some subset of us will be submitting a
paper for the August issue of BYTE.  The title will probably be
something like "LIL - A Lisp Implementation Language". We will
present a Lisp based compiler for implementing system software.
With the language (compiler and perhaps interpreter) embedded in
a Lisp-like environment, the user will have access to the Lisp
debugging tools.  We feel that the ability to manipulate programs
via macros coded in Lisp will speed the code writing phase.  This
language should be usable on micro-processors without difficulty.


Here is a preliminary outline of our proposed LIL paper.  It would help us
a lot if you could give us some idea of how much space we can expect for
this paper, and how many diagrams it would be good to include, and of course
the deadlines.

Preliminary LIL paper outline

1. WHY MESS WITH LIL?
   a. Motivation - what is an implementation language?  Why do we need one?
   b. LIL is not X for X in {LISP, BLISS, C, Pascal, PL/I, assembler, etc.}
   c. Post parse-time macros (as opposed to textual substitution pre-parse-time
      macros)
      i.   Used for user-extensions to the control structure of the language
      ii.  Used to extend the semantics of the language
      iii. Used to extend the data structures which the language can handle.
   d. Debugging aids via interpretation in LISP.
   e. Stand alone debugging aids.
   f. Extra control structures not found in other languages.

2. WHAT IS LIL?
   a. Overture - What LIL offers, generally - functions, variables, pointers,
      special forms (control-structure constructs)
   b. Syntax - what there is of it
      i.   lexical scan is easy, (), prefix characters ".'  - symbols/numbers
      ii.  parse is trivial, explicit list notation
      iii. all built into LISP's READ function.
   c. Basic semantics
      i.   Function calling
      ii.  Variables
      iii. . and ' (pointer-follower and address prefix)
   d. Special forms (case, select, do, cond, and, or, etc.)
   e. Macros - written in LISP, not LIL!
      i.   LISP is a good language for manipulating LIL code.  (Or other code)
      ii.  Full interpreter available in the compiler.
   f. Micros down to machine code level.
      i.   Give hands-on access to special capabilities of the host machine
      ii.  Enforce transportability; all machine-dependent code is in the form
	   of micros; these can be re-coded for a new machine, since the machine
	   dependencies have been isolated.
      iii. Full user-control of compilation
   g. User-defined optimizations.

3. COMPILER
   a. Why write compiler in LISP
   b. Optimization

4. INTERPRETER
   a. Incremental debugging
   b. Single step debugging, at source-code level, not machine-level.
   c. Why you want both a compiler and an interpreter

5. COMPARE & CONTRAST LIL WITH OTHER IMPLEMENTATION LANGUAGES
   a. Detail here that couldn't be included in section 1.
   b. Sample program expressed in several such languages
      (Perhaps do the core of a BASIC interpreter)
   c. BLISS as LIL; show how LISP makes language manipulations easy

6. EXAMPLE AND DESCRIPTION OF WHAT WE'VE IMPLEMENTED
   a. Not a lot yet, but we're trying.
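The post-parse-time macro idea in 1c can be sketched concretely: the macro operates on parsed list structure rather than on text, so an expansion can never be garbled by tokenization the way a textual pre-parse macro can. Python stands in for LISP here, and the `unless` form is an invented example, not part of LIL:

```python
def expand(form, macros):
    """Expand post-parse macros.  `form` is parsed LISP-style list
    structure; `macros` maps a head symbol to a function from the
    whole form to its replacement form.  Expansions are themselves
    re-expanded, as a LISP macroexpander would."""
    if not isinstance(form, list) or not form:
        return form
    head = form[0]
    if isinstance(head, str) and head in macros:
        return expand(macros[head](form), macros)
    return [expand(f, macros) for f in form]

# A hypothetical control-structure extension (cf. 1c.i):
# (unless p a b ...) rewritten into the primitive cond form.
macros = {'unless': lambda f: ['cond', [['not', f[1]], *f[2:]]]}

expand(['unless', 'p', 'a'], macros)   # → ['cond', [['not', 'p'], 'a']]
```

Because the rewrite happens on trees, the full LISP interpreter is available inside the macro function at compile time, which is the point of 2e.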